fix(integrations): openai/openai-agents: convert input message format #5248
base: master
Conversation
…he schema we expect for the `gen_ai.request.messages`
…ntent and update message handling
…nAI message handling
Semver Impact of This PR: 🟢 Patch (bug fixes)

📋 Changelog Preview: this is how your changes will appear in the changelog.
New Features ✨ / Bug Fixes 🐛 / Documentation 📚 / Internal Changes 🔧 / Release / Other

🤖 This preview updates automatically when you update the PR.
sentrivana left a comment
Looks ok to me, two things:
- Can we add a check to the tests that we're not modifying the user's messages? Either as a new test or just adding an assert to the tests added in this PR
- I assume there is no way to dedupe some of the trimming logic between OpenAI agents and OpenAI because the format is different?
Done
Looking into that. Cursor says no. But I'm not sure tbh.
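For illustration, the non-mutation check requested above could look roughly like the sketch below; `normalize_messages` is a stand-in placeholder rather than the SDK's actual helper, and the test assumes pytest.

```python
import copy


def normalize_messages(messages):
    # Placeholder for the integration's actual input-conversion helper;
    # the real test would call whatever function this PR adds instead.
    return [dict(message) for message in messages]


def test_user_messages_not_mutated():
    messages = [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "describe this"},
                {"type": "image_url", "image_url": {"url": "data:image/png;base64,AAAA"}},
            ],
        }
    ]
    snapshot = copy.deepcopy(messages)

    normalize_messages(messages)

    # The caller's list must come out exactly as it went in.
    assert messages == snapshot
```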
… of full message dicts

- Extract choice.message.content for gen_ai.response.text instead of model_dump()
- Add separate gen_ai.response.tool_calls extraction for Chat Completions API
- Handle audio transcripts in responses
- Extract shared extract_response_output() to ai/utils.py for Responses API output
- Refactor OpenAI and OpenAI Agents integrations to use shared utility
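As a rough sketch of the Chat Completions extraction this commit describes, assuming the standard choices[*].message shape; the dict-based helper below is illustrative and not the integration's actual code.

```python
def extract_chat_completion_output(response):
    """Collect text and tool calls from a Chat Completions-style response dict."""
    texts = []
    tool_calls = []
    for choice in response.get("choices", []):
        message = choice.get("message", {})
        if message.get("content"):
            texts.append(message["content"])
        # Audio responses carry their text in a transcript field.
        audio = message.get("audio") or {}
        if audio.get("transcript"):
            texts.append(audio["transcript"])
        tool_calls.extend(message.get("tool_calls") or [])
    return {
        "gen_ai.response.text": texts,
        "gen_ai.response.tool_calls": tool_calls,
    }


example = {
    "choices": [
        {"message": {"content": "Hello!", "tool_calls": [{"id": "call_1", "type": "function"}]}},
    ]
}
print(extract_chat_completion_output(example))
```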
I did look into that. I eliminated the code duplication for output messages, but the input messages are actually different.
…AI messages

Add transform_content_part() and transform_message_content() functions to standardize content part handling across all AI integrations. These functions transform various SDK-specific formats (OpenAI, Anthropic, Google, LangChain) into a unified format:

- blob: base64-encoded binary data
- uri: URL references (including file URIs)
- file: file ID references

Also adds a get_modality_from_mime_type() helper to infer content modality (image/audio/video/document) from MIME types.
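A hedged illustration of the unified content-part shape and MIME-type heuristic described in this commit; the key names and logic are inferred from the commit message, not copied from the functions it adds.

```python
def get_modality_from_mime_type(mime_type):
    # Coarse modality from the MIME type's major part; everything else is a document.
    major = (mime_type or "").split("/", 1)[0]
    return major if major in ("image", "audio", "video") else "document"


def transform_openai_image_part(part):
    """Turn an OpenAI-style image_url content part into the unified shape.

    Unified parts use one of: blob (base64 data), uri (URL or file URI),
    or file (provider file ID).
    """
    url = part["image_url"]["url"]
    if url.startswith("data:"):
        mime_type = url[len("data:"):].split(";", 1)[0]
        return {
            "type": "blob",
            "modality": get_modality_from_mime_type(mime_type),
            "mime_type": mime_type,
            "data": url.split(",", 1)[-1],
        }
    return {"type": "uri", "modality": "image", "uri": url}


print(transform_openai_image_part(
    {"type": "image_url", "image_url": {"url": "https://example.com/cat.png"}}
))
```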
Replace local _convert_message_parts function with the shared transform_message_content function to deduplicate code across AI integrations.
sentry_sdk/ai/utils.py (Outdated)
return "image"  # Default fallback for unknown types

def transform_content_part(
Can we create separate transform_openai_content_part(), transform_anthropic_content_part(), and so on, in addition to the generic function?
From my perspective, the heuristics we use to determine whether a content block is OpenAI-style, Anthropic-style, or something else are best-effort and can break as input schemas evolve. Maybe a provider becomes more permissive in what it accepts, etc.
We can reduce the risk of taking the wrong code path for the OpenAI, Anthropic, and similar client libraries by calling the specific functions like transform_openai_content_part where possible.
And in generic libraries like LangChain we accept that the heuristics in the generic transform_content_part() are the best we can do.
Add dedicated transform functions for each AI SDK:

- transform_openai_content_part() for OpenAI/LiteLLM image_url format
- transform_anthropic_content_part() for Anthropic image/document format
- transform_google_content_part() for Google GenAI inline_data/file_data
- transform_generic_content_part() for LangChain-style generic format

Refactor transform_content_part() to be a heuristic dispatcher that detects the format and delegates to the appropriate specific function. This allows integrations to use the specific function directly for better performance and clarity, while maintaining backward compatibility through the dispatcher for frameworks that can receive any format.

Added 38 new unit tests for the SDK-specific functions.
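A minimal sketch of the dispatcher pattern this commit describes: check for provider-specific markers and fall back to a generic handler. The provider handlers are collapsed to stubs here, and the detection rules are assumptions; the real functions live in sentry_sdk/ai/utils.py.

```python
def transform_openai_content_part(part):
    return {"type": "uri", "uri": part["image_url"]["url"]}


def transform_anthropic_content_part(part):
    source = part["source"]
    return {"type": "blob", "mime_type": source.get("media_type"), "data": source.get("data")}


def transform_google_content_part(part):
    if "file_data" in part:
        return {"type": "uri", "uri": part["file_data"].get("file_uri")}
    blob = part["inline_data"]
    return {"type": "blob", "mime_type": blob.get("mime_type"), "data": blob.get("data")}


def transform_generic_content_part(part):
    return {"type": "uri", "uri": part.get("url")}


def transform_content_part(part):
    """Heuristic dispatcher: detect the provider format, then delegate."""
    if "image_url" in part:  # OpenAI / LiteLLM vision parts
        return transform_openai_content_part(part)
    if part.get("type") in ("image", "document") and "source" in part:  # Anthropic
        return transform_anthropic_content_part(part)
    if "inline_data" in part or "file_data" in part:  # Google GenAI
        return transform_google_content_part(part)
    return transform_generic_content_part(part)  # LangChain-style fallback
```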
Replace generic transform_message_content with the OpenAI-specific transform_openai_content_part function for better performance and clarity since we know OpenAI always uses the image_url format.
@@ -0,0 +1,3 @@
version = 1
revision = 3
requires-python = ">=3.13"
Accidentally committed uv.lock with wrong Python version
Low Severity
The newly added uv.lock file at the repository root specifies requires-python = ">=3.13", which conflicts with the project's actual Python support (python_requires=">=3.6" in setup.py). This appears to be a development environment artifact that was unintentionally committed. The minimal 3-line lock file is unusual for a root-level lock and doesn't match the project's configuration.
Description
Convert messages to the common gen_ai.request.messages structure.

Issues
Closes https://linear.app/getsentry/issue/TET-1633/redact-images-openai-openai-agents
Closes https://linear.app/getsentry/issue/TET-1639/openai-output-not-rendered-nicely
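Purely for illustration, a before/after of the kind of conversion the description refers to; the exact key names in the gen_ai.request.messages schema are the SDK's and are only approximated below.

```python
# An OpenAI Responses-style input message as a user might pass it ...
openai_message = {
    "role": "user",
    "content": [
        {"type": "input_text", "text": "What is in this image?"},
        {"type": "input_image", "image_url": "https://example.com/photo.jpg"},
    ],
}

# ... and one plausible normalized form recorded on the span.
normalized_message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "What is in this image?"},
        {"type": "uri", "modality": "image", "uri": "https://example.com/photo.jpg"},
    ],
}
```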